Latest forecasts

Forecasts of weekly cases and deaths per 100,000 population. Click the Forecast tab above to view all past forecasts.

Cases

No forecasts are available, possibly because of recent anomalies in the underlying data.

Deaths

Predictive performance

In the following plots, the panels on the left show the skill of the model compared to the ensemble and baseline models, and the panels on the right show the corresponding forecasts.

Skill is shown as the weighted interval score (WIS).
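
For reference, the WIS can be computed directly from a set of predictive quantiles: it equals twice the mean pinball (quantile) loss when the quantiles form symmetric central intervals around the median, as with the 23 quantile levels used for hub submissions. A minimal sketch (the function name and the exact list of levels are illustrative, not taken from the hub's code):

```python
import numpy as np

def weighted_interval_score(quantile_levels, predicted, observed):
    """WIS from predictive quantiles, using the equivalence
    WIS = 2 * mean pinball loss, which holds for symmetric central
    intervals plus the median (e.g. the 23 hub quantile levels)."""
    tau = np.asarray(quantile_levels, dtype=float)
    q = np.asarray(predicted, dtype=float)
    y = float(observed)
    # Pinball loss at each quantile level tau.
    pinball = np.where(y >= q, tau * (y - q), (1.0 - tau) * (q - y))
    return 2.0 * pinball.mean()

# The 23 quantile levels: 0.01, 0.025, 0.05, 0.10, ..., 0.95, 0.975, 0.99.
levels = [0.01, 0.025] + [round(0.05 * k, 2) for k in range(1, 20)] + [0.975, 0.99]
```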

Cases

1 week ahead horizon

Overall predictive performance

Overall scores are only computed for models that provided forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment, this model does not fulfill that criterion.

Performance over time

No recent forecasts available targeting the last 10 weeks.

2 weeks ahead horizon

Overall predictive performance

Overall scores are only computed for models that provided forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment, this model does not fulfill that criterion.

Performance over time

No recent forecasts available targeting the last 10 weeks.

3 weeks ahead horizon

Overall predictive performance

Overall scores are only computed for models that provided forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment, this model does not fulfill that criterion.

Performance over time

No recent forecasts available targeting the last 10 weeks.

4 weeks ahead horizon

Overall predictive performance

Overall scores are only computed for models that provided forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment, this model does not fulfill that criterion.

Performance over time

No recent forecasts available targeting the last 10 weeks.

Deaths

1 week ahead horizon

Overall predictive performance

The table shows the skill of the UMass-MechBayes model relative to the baseline model (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks, in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate that the model performs better than the comparison model; values greater than 1 indicate that it performs worse. WIS values are only shown if all 23 quantiles are provided.
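
As an illustration of how such relative skill values can be obtained: the ratio is the model's mean score divided by the comparator's mean score over the forecast targets both models covered. The sketch below assumes a long-format score table; the column names are hypothetical, not the hub's actual schema:

```python
import pandas as pd

def relative_skill(scores: pd.DataFrame, model: str,
                   comparator: str, metric: str = "wis") -> float:
    """Mean `metric` of `model` divided by that of `comparator`,
    restricted to targets both models forecast.
    Values < 1: `model` outperformed `comparator`."""
    keys = ["location", "target_end_date", "horizon"]  # hypothetical columns
    a = scores.loc[scores["model"] == model].set_index(keys)[metric]
    b = scores.loc[scores["model"] == comparator].set_index(keys)[metric]
    shared = a.index.intersection(b.index)
    return a.loc[shared].mean() / b.loc[shared].mean()

# e.g. relative_skill(scores, "UMass-MechBayes", "EuroCOVIDhub-baseline")
```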

Performance over time

2 weeks ahead horizon

Overall predictive performance

The table shows the skill of the UMass-MechBayes model relative to the baseline model (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks, in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate that the model performs better than the comparison model; values greater than 1 indicate that it performs worse. WIS values are only shown if all 23 quantiles are provided.

Performance over time

3 weeks ahead horizon

Overall predictive performance

The table shows the skill of the UMass-MechBayes model relative to the baseline model (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks, in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate that the model performs better than the comparison model; values greater than 1 indicate that it performs worse. WIS values are only shown if all 23 quantiles are provided.

Performance over time

4 weeks ahead horizon

Overall predictive performance

The table shows the skill of the UMass-MechBayes model relative to the baseline model (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks, in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate that the model performs better than the comparison model; values greater than 1 indicate that it performs worse. WIS values are only shown if all 23 quantiles are provided.

Performance over time

Forecast calibration

The plots below describe the calibration of the model, that is, its ability to correctly quantify its uncertainty, across all countries for which forecasts were made.

Overall coverage

Coverage is the proportion of observations that fall within a given prediction interval. Ideally, a forecast model would achieve coverage of 0.50 for the 50% prediction interval (i.e., 50% of observations fall within the 50% prediction interval) and coverage of 0.95 for the 95% prediction interval (i.e., 95% of observations fall within the 95% prediction interval), as indicated by the dashed horizontal lines below. Coverage values greater than these nominal levels indicate that the forecasts are underconfident, i.e. prediction intervals tend to be too wide, whereas coverage values below these nominal levels indicate that the forecasts are overconfident, i.e. prediction intervals tend to be too narrow.
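
Computing empirical coverage from stored quantile forecasts is straightforward; a minimal sketch, assuming the interval endpoints are taken from the 0.25/0.75 and 0.025/0.975 predictive quantiles:

```python
import numpy as np

def empirical_coverage(lower, upper, observed):
    """Proportion of observations inside [lower, upper]."""
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    return float(np.mean((observed >= lower) & (observed <= upper)))

# 50% interval from the 0.25 and 0.75 quantiles,
# 95% interval from the 0.025 and 0.975 quantiles:
# cov50 = empirical_coverage(q25, q75, y)    # ideally close to 0.50
# cov95 = empirical_coverage(q025, q975, y)  # ideally close to 0.95
```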

PIT histograms

The figures below are PIT histograms for all past forecasts. They show the proportion of observed values falling within each bin of the predictive distribution (bin width: 0.1). If the forecasts were perfectly calibrated, observations would fall evenly across these equally spaced bins, i.e. the histograms would be flat.
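
For quantile-based forecasts, the PIT bin of each observation can be approximated by counting how many of the decile quantiles (levels 0.1 to 0.9) lie below it. A minimal sketch under that assumption (the array layout is illustrative):

```python
import numpy as np

def pit_histogram(decile_forecasts, observed):
    """PIT histogram proportions for quantile forecasts.

    `decile_forecasts`: array of shape (n_forecasts, 9) holding the
    predictive quantiles at levels 0.1, ..., 0.9 for each forecast.
    Each observation is assigned to one of 10 equally wide PIT bins
    according to how many deciles lie below it; a flat histogram
    indicates good calibration.
    """
    q = np.asarray(decile_forecasts, dtype=float)
    y = np.asarray(observed, dtype=float)[:, None]
    bins = (q < y).sum(axis=1)            # bin index 0..9 per observation
    counts = np.bincount(bins, minlength=10)
    return counts / counts.sum()          # proportion per 0.1-wide bin
```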